    A CREDIT ANALYSIS OF THE UNBANKED AND UNDERBANKED: AN ARGUMENT FOR ALTERNATIVE DATA

    The purpose of this study is to ascertain the statistical and economic significance of non-traditional credit data for individuals who do not have sufficient economic data, collectively known as the unbanked and underbanked. Not having sufficient economic information often determines whether unbanked and underbanked individuals pay a higher price for credit or are denied credit entirely. In terms of regulation, there is strong interest in credit models that can inform policies on how to gradually move sections of the unbanked and underbanked population into the general financial network. In Chapter 2 of the dissertation, I take a statistical approach to establish the role of non-traditional credit data, known as alternative data, in modeling borrower default behavior for unbanked and underbanked individuals. Further, using a combined traditional and alternative auto loan dataset, I am able to make statements about which alternative data variables contribute to borrower default behavior. Additionally, I devise a way to statistically test the goodness-of-fit metric of several machine learning classification models to ascertain whether the alternative data truly helps in the credit-building process. In Chapter 3, I discuss the economic significance of incorporating alternative data in the credit modeling process. Using a maximum utility approach, I show that combining alternative and traditional data yields a higher profit for the lender than using either data source alone. Additionally, Chapter 3 advocates for the use of loss functions that align with a lender's business objective of making a profit.
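
    Below is a minimal, hypothetical Python sketch (on synthetic data) of the kind of comparison described above: a default model fitted on traditional bureau data alone versus one fitted on traditional plus alternative data, scored by an assumed lender profit function rather than a purely statistical metric. The feature names, approval cutoff, and margin/loss figures are illustrative assumptions, not values from the dissertation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic borrowers: one traditional feature and two alternative-data features.
        rng = np.random.default_rng(0)
        n = 5_000
        credit_score   = rng.normal(620, 60, n)            # traditional bureau score
        utility_ontime = rng.binomial(12, 0.80, n)          # alternative: on-time utility payments (last 12 months)
        rent_ontime    = rng.binomial(12, 0.75, n)          # alternative: on-time rent payments (last 12 months)
        logit = -2 + 0.006 * (680 - credit_score) + 0.15 * (8 - utility_ontime) + 0.10 * (8 - rent_ontime)
        default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X_trad = np.column_stack([credit_score])
        X_comb = np.column_stack([credit_score, utility_ontime, rent_ontime])

        def expected_profit(X, y, cutoff=0.15, margin=300.0, loss_given_default=2_000.0):
            """Approve applicants whose predicted default risk is below the cutoff;
            profit = interest margin on approved good loans minus losses on approved defaults."""
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            approve = model.predict_proba(X_te)[:, 1] < cutoff
            good = approve & (y_te == 0)
            bad = approve & (y_te == 1)
            return margin * good.sum() - loss_given_default * bad.sum()

        print("traditional data only:         ", expected_profit(X_trad, default))
        print("traditional + alternative data:", expected_profit(X_comb, default))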

    An Analysis of Accuracy using Logistic Regression and Time Series

    This paper analyzes the accuracy rates of logistic regression and time series models. It also examines a relatively new performance index that takes into consideration the business assumptions of credit markets. Although prior research has focused on evaluation metrics such as AUC and the Gini index, this new measure has a more intuitive interpretation for managers and decision makers and can be applied to both logistic regression and time series models.
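
    As a rough illustration of the metric comparison (the paper's exact index is not reproduced here), the Python sketch below computes AUC, the Gini index derived from it (Gini = 2*AUC - 1), and a simple profit-weighted score whose cutoff, margin, and loss parameters stand in for assumed credit-market economics.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Simulated default labels and model scores (1 = default, higher score = riskier).
        rng = np.random.default_rng(1)
        y_true = rng.binomial(1, 0.2, 2_000)
        y_score = np.clip(0.2 + 0.4 * y_true + rng.normal(0, 0.2, 2_000), 0, 1)

        auc = roc_auc_score(y_true, y_score)
        gini = 2 * auc - 1                      # Gini index as usually derived from AUC

        def business_index(y, score, cutoff=0.4, margin=1.0, loss=5.0):
            """Hypothetical profit-weighted measure: net gain per applicant when loans
            are approved below the score cutoff, under an assumed margin-to-loss ratio."""
            approve = score < cutoff
            profit = margin * np.sum(approve & (y == 0)) - loss * np.sum(approve & (y == 1))
            return profit / len(y)

        print(f"AUC = {auc:.3f}, Gini = {gini:.3f}, business-oriented index = {business_index(y_true, y_score):.3f}")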

    Regulatory Effects on Traditional Financial Systems Versus Blockchain and Emerging Financial Systems

    The expansion of the Internet led to disruptive business and consumer processes, as existing regulations do not cover the scope and scale of emerging financial technologies. Using organizational economic theory as the foundation, the purpose of this correlational study was to examine and compare the financial regulatory impact on traditional and emerging financial systems across a variety of factors, including organizational type, predicted users, operational concerns, reasons for cost increases, and changes in business practices as a result of the regulatory environment. Data were collected through a survey of 227 adult Americans who engage in the financial sector and are familiar with the US regulatory environment. Data were analyzed using descriptive statistics and cross tabulations, and statistical significance was tested using Lambda and Kendall's Tau-c. The key finding of this study is that the effects of regulations differ between traditional and emerging financial systems, showing the need to develop and implement policies that are context specific to the emerging financial systems. The recommendations from the study include suggestions to regulatory agencies to regulate and support emerging financial systems in line with new technology that envisions efficiency and economic fairness. The positive social change implications of this study include the development of a strategy that can ensure economic stability, reduce irregularities, and strengthen investments with a view to protecting the financial system from breakdown.
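
    A minimal sketch of the kind of analysis named in the abstract, using made-up survey responses: cross-tabulating system type against an ordinal regulatory-burden rating and testing the association with Kendall's tau-c (Lambda is omitted since SciPy does not provide it). All variable names and values are illustrative, not the study's data.

        import numpy as np
        import pandas as pd
        from scipy.stats import kendalltau

        # Simulated responses from 227 participants (0 = traditional system, 1 = emerging system).
        rng = np.random.default_rng(2)
        system = rng.choice([0, 1], size=227, p=[0.6, 0.4])
        burden = np.clip(2 + system + rng.integers(-1, 2, size=227), 1, 5)  # ordinal 1-5 burden rating

        table = pd.crosstab(pd.Series(system, name="system type"),
                            pd.Series(burden, name="reported regulatory burden"))
        print(table)

        tau_c, p_value = kendalltau(system, burden, variant="c")  # Stuart's tau-c for ordinal association
        print(f"Kendall's tau-c = {tau_c:.3f}, p = {p_value:.4f}")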

    Driving Marketing Efficiency in the Age of Big Data: Analysis of Subprime Automotive Borrowers

    Big Data methodologies are applied to understand subprime borrowers in the U.S. automobile space. The focus on the automobile market is essential, as this subsegment is responsible for directly and indirectly employing over one million people and creating payrolls in excess of $100 billion annually in the U.S. This article finds that if a subprime borrower is a homeowner, the probability of repaying their auto loan increases by almost 4%. By contrast, if the borrower is renting, the likelihood of repaying their auto loan increases by nearly 1.4%. Applying Big Data to subprime auto lending can add thousands of jobs and help secure millions of dollars in payroll.
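
    The homeownership effect reads like a marginal effect from a repayment model; the Python sketch below shows one common way to compute such a number on synthetic data (flip the homeowner flag for every borrower and average the change in predicted repayment probability). The data, coefficients, and resulting percentage are illustrative and will not match the article's 4% and 1.4% figures.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Synthetic subprime auto-loan borrowers.
        rng = np.random.default_rng(3)
        n = 10_000
        homeowner = rng.binomial(1, 0.35, n)                      # 1 = owns a home, 0 = rents
        income_k  = rng.normal(45, 12, n)                         # annual income in $1,000s
        logit     = 0.5 + 0.25 * homeowner + 0.02 * (income_k - 45)
        repay     = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1 = loan repaid

        X = np.column_stack([homeowner, income_k])
        model = LogisticRegression(max_iter=1000).fit(X, repay)

        # Average marginal effect of homeownership: set the flag to 1 vs. 0 for everyone.
        p_own  = model.predict_proba(np.column_stack([np.ones(n),  income_k]))[:, 1]
        p_rent = model.predict_proba(np.column_stack([np.zeros(n), income_k]))[:, 1]
        print(f"avg. change in repayment probability from owning: {np.mean(p_own - p_rent):.1%}")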

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
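
    A toy illustration of the distinction the abstract draws, with simulated numbers rather than the study's 164-team results: each team's standard error reflects sampling noise within its own analysis, while the non-standard error is the spread of point estimates across teams analysing the same data.

        import numpy as np

        rng = np.random.default_rng(4)
        n_teams = 164
        true_effect = 0.10

        # Each team makes different (but defensible) analysis choices, shifting its estimate.
        analysis_variation = rng.normal(0.0, 0.05, n_teams)
        sampling_noise     = rng.normal(0.0, 0.04, n_teams)
        estimates          = true_effect + analysis_variation + sampling_noise
        reported_se        = 0.04                                # assumed per-team standard error

        nse = estimates.std(ddof=1)                              # across-team spread = non-standard error
        print(f"typical standard error: {reported_se:.3f}")
        print(f"non-standard error:     {nse:.3f}")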

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.

    Non-Standard Errors

    Working paper series URL: https://centredeconomiesorbonne.cnrs.fr/publications/ (Documents de travail du Centre d'Economie de la Sorbonne 2021.33, ISSN 1955-611X). See also this working paper on SSRN: https://ssrn.com/abstract=3981597
    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.